
A Comparison of Hybrid Incremental Reuse Strategies for Reinforcement Learning in Genetic Programming

Scott Harmon, Edwin Rodríguez, Christopher Zhong, and William Hsu

Department of Computing and Information Sciences, Kansas State University
sjh4069@cis.ksu.edu
edwin@cis.ksu.edu
czh9768@cis.ksu.edu
bhsu@cis.ksu.edu

Abstract. Easy missions is an approach to machine learning that seeks to synthesize solutions for complex tasks from those for simpler ones. ISLES (Incrementally Staged Learning from Easier Subtasks) [1] is a genetic programming (GP) technique that achieves this by using identified goals and fitness functions for subproblems of the overall problem. Solutions evolved for these subproblems are then reused to speed up learning, either as automatically defined functions (ADFs) or by seeding a new GP population. Previous positive results using both approaches for learning in multi-agent systems (MAS) showed that incremental reuse with easy missions achieves overall fitness comparable to or better than single-layered GP. A key unresolved issue concerned hybrid reuse combining ADFs with easy missions. Results in the keep-away soccer (KAS) [2] domain, a test bed for MAS learning, were also inconclusive as to whether compactness-inducing reuse helped or hurt overall agent performance. In this paper, we compare reuse in single-layered GP (with and without ADFs) and easy-missions GP against two new types of GP learning systems with incremental reuse.
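The population-seeding form of reuse described above can be sketched as follows. This is a toy symbolic-regression example, not the actual ISLES or KAS setup: the tasks (learn x + 1, then 2x + 2), the tree representation, and all function names are illustrative assumptions, and the sketch uses mutation-only evolution for brevity.

```python
import random

# Function and terminal sets for a minimal expression-tree GP.
OPS = {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b}
TERMS = ['x', 1, 2]

def random_tree(depth=2):
    # Grow a random expression tree over {add, mul} and terminals {x, 1, 2}.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, target):
    # Negative squared error over sample points (higher is better, max is 0).
    return -sum((evaluate(tree, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(tree, p=0.2):
    # Replace the current subtree with probability p, else recurse into a child.
    if random.random() < p:
        return random_tree()
    if isinstance(tree, tuple):
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left, p), right)
        return (op, left, mutate(right, p))
    return tree

def evolve(pop, target, generations=30):
    # Truncation selection plus mutation; returns the fittest individual found.
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, target), reverse=True)
        survivors = pop[:len(pop) // 2]
        pop = survivors + [mutate(t) for t in survivors]
    return max(pop, key=lambda t: fitness(t, target))

random.seed(0)
# Easy mission: evolve a solution for the simpler target x + 1.
easy_best = evolve([random_tree() for _ in range(40)], target=lambda x: x + 1)
# Incremental reuse: seed the harder task's initial population with
# copies of the easy-mission solution alongside random individuals.
seeded = [easy_best] * 10 + [random_tree() for _ in range(30)]
hard_best = evolve(seeded, target=lambda x: 2 * x + 2)
```

In the ADF variant of reuse (not shown), `easy_best` would instead be wrapped as a callable subroutine available to the function set of the harder run, rather than copied into its population.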

LNCS 3103, p. 706 f.



© Springer-Verlag Berlin Heidelberg 2004